Appendix — KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training
Anonymous Author(s)

Appendix A. Proof of Lemma 1
Table 1 summarizes the models and datasets used in this work. ImageNet-1K (Deng et al., 2009): we use a subset of the ImageNet dataset. DeepCAM (Kurth et al., 2018): a dataset for image segmentation. Fractal-3K (Kataoka et al., 2022): a dataset rendered with the Visual Atom method; we also use the setting of Kataoka et al. (2022). Table 2 details our hyper-parameters. Specifically, we follow the TorchVision guideline to train ResNet-50 with the cosine learning-rate schedule. To show the robustness of KAKURENBO, we also train ResNet-50 under different settings; for the ResNet-50 (A) setting, we follow the hyper-parameters reported in Goyal et al. (2017). It is worth noting that KAKURENBO merely hides samples before the input pipeline. In this section, we present an analysis of the factors affecting KAKURENBO's performance. The results show that our method can dynamically hide samples at each epoch.
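Since KAKURENBO hides samples before the input pipeline rather than modifying the model or loader, the core selection step can be illustrated compactly. The following is a minimal sketch, not the authors' implementation: it assumes per-sample losses from the previous epoch are available and hides a fixed fraction of the lowest-loss samples for the next epoch (the function name and hiding criterion are illustrative).

```python
def select_visible_samples(losses, hide_fraction):
    """Illustrative KAKURENBO-style selection: hide the lowest-loss
    samples for the next epoch, since samples the model already fits
    well contribute little to training.

    losses: per-sample loss values from the previous epoch.
    hide_fraction: fraction of the dataset to hide this epoch.
    Returns the indices of the samples kept visible.
    """
    n_hide = int(len(losses) * hide_fraction)
    # Rank sample indices by loss, ascending (lowest-loss first).
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    hidden = set(order[:n_hide])
    return [i for i in range(len(losses)) if i not in hidden]

# Example: 10 samples, hide the 30% with the smallest loss.
losses = [0.9, 0.1, 0.5, 0.05, 0.7, 0.2, 0.8, 0.3, 0.6, 0.4]
visible = select_visible_samples(losses, 0.3)
```

Because only the visible index list changes from epoch to epoch, the data loader and training loop are untouched, matching the paper's point that samples are hidden before the input pipeline.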
Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback
Communication overhead is a major bottleneck hampering the scalability of distributed machine learning systems. Recently, there has been a surge of interest in using gradient compression to improve the communication efficiency of distributed neural network training. Using 1-bit quantization, signSGD with majority vote achieves a 32x reduction in communication cost. However, its convergence is based on unrealistic assumptions and can diverge in practice. In this paper, we propose a general distributed compressed SGD with Nesterov's momentum.
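The mechanics behind sign-based compression with error feedback can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's algorithm: each coordinate is sent as ±1 scaled by the mean magnitude (roughly 1 bit per coordinate plus one scalar), and the compression error is accumulated into a residual that is added back before the next compression step, which is what distinguishes error-feedback schemes from plain signSGD. The function name is ours.

```python
def sign_compress_with_error_feedback(grad, residual):
    """One worker step of sign compression with error feedback (sketch).

    grad: local gradient vector for this step.
    residual: compression error carried over from the previous step.
    Returns (compressed vector, updated residual).
    """
    # Add back the error left over from the last compression.
    corrected = [g + r for g, r in zip(grad, residual)]
    # Scale each transmitted sign by the mean absolute value, so the
    # compressed vector preserves the gradient's overall magnitude.
    scale = sum(abs(c) for c in corrected) / len(corrected)
    compressed = [scale if c >= 0 else -scale for c in corrected]
    # Whatever the sign quantizer dropped becomes next step's residual.
    new_residual = [c - q for c, q in zip(corrected, compressed)]
    return compressed, new_residual

g = [0.5, -0.2, 0.1, -0.4]
q, r = sign_compress_with_error_feedback(g, [0.0] * 4)
```

Over many steps the residual feeds the dropped information back into the update, which is the mechanism credited with restoring convergence where unmodified majority-vote signSGD can diverge.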
Stacked Ensemble of Fine-Tuned CNNs for Knee Osteoarthritis Severity Grading
Adarsh Gupta, Japleen Kaur, Tanvi Doshi, Teena Sharma, Nishchal K. Verma, Shantaram Vasikarla
Abstract--Knee Osteoarthritis (KOA) is a musculoskeletal condition that can cause significant limitations and impairments in daily activities, especially among older individuals. To evaluate the severity of KOA, X-ray images of the affected knee are typically analyzed and a grade is assigned based on the Kellgren-Lawrence (KL) grading system, which classifies KOA severity into five levels, ranging from 0 to 4. This approach requires a high level of expertise and time and is susceptible to subjective interpretation, thereby introducing potential diagnostic inaccuracies. To address this problem, a stacked ensemble model of fine-tuned Convolutional Neural Networks (CNNs) was developed for two classification tasks: a binary classifier for detecting the presence of KOA, and a multiclass classifier for precise grading across the KL spectrum. The proposed stacked ensemble model consists of a diverse set of pre-trained architectures, including MobileNetV2, You Only Look Once (YOLOv8), and DenseNet201 as base learners and Categorical Boosting (CatBoost) as the meta-learner. The proposed model achieved a balanced test accuracy of 73% in multiclass classification and 87.5% in binary classification, which is higher than previous works in the extant literature.

Knee Osteoarthritis (KOA) [1] is a degenerative musculoskeletal joint disease in which the knee cartilage breaks down over time.
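The stacking step described above — base learners (MobileNetV2, YOLOv8, DenseNet201) producing class probabilities that a CatBoost meta-learner consumes — can be sketched abstractly. This is a minimal illustration of the meta-feature construction only, with toy probability vectors standing in for real CNN outputs; the function name and data are ours, not the paper's code.

```python
def stack_meta_features(base_probs_per_model):
    """Build meta-learner inputs by concatenating each base model's
    class-probability vector for every sample (stacking sketch).

    base_probs_per_model: list over models; each entry is a list over
    samples of per-class probability vectors.
    Returns one meta-feature row per sample.
    """
    n_samples = len(base_probs_per_model[0])
    meta_features = []
    for i in range(n_samples):
        row = []
        for model_probs in base_probs_per_model:
            # Each model contributes its full probability vector.
            row.extend(model_probs[i])
        meta_features.append(row)
    return meta_features

# Two toy base models, three samples, five KL classes (grades 0-4).
model_a = [[0.6, 0.1, 0.1, 0.1, 0.1] for _ in range(3)]
model_b = [[0.2, 0.2, 0.2, 0.2, 0.2] for _ in range(3)]
X_meta = stack_meta_features([model_a, model_b])
```

Each meta-feature row (here 2 models × 5 classes = 10 values) would then be fed to the meta-learner, which learns to weigh the base models' agreements and disagreements per class.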